In logic and proof theory, natural deduction is a kind of proof calculus in which logical reasoning is expressed by inference rules closely related to the "natural" way of reasoning. This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning.
Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen. The term natural deduction (or rather, its German equivalent natürliches Schließen) was coined in that paper.
Gentzen was motivated by a desire to establish the consistency of number theory. He was unable to prove the main result required for the consistency result, the cut elimination theorem—the Hauptsatz—directly for natural deduction. For this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz both for classical and intuitionistic logic. In a series of seminars in 1961 and 1962, Dag Prawitz gave a comprehensive summary of natural deduction calculi, and transported much of Gentzen's work with sequent calculi into the natural deduction framework. His 1965 monograph Natural deduction: a proof-theoretical study was to become a reference work on natural deduction, and included applications for modal logic and second-order logic.
In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly. The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives.
+ Notational variants of the connectives
! Connective ! Notational variants
AND | A ∧ B, A · B, A & B, AB |
equivalent | A ≡ B, A ⇔ B, A ↔ B |
implies | A ⇒ B, A ⊃ B, A → B |
Sheffer stroke (NAND) | A ↑ B, ¬(A ∧ B) |
nonequivalent | A ≢ B, A ⇎ B |
Logical NOR | A ↓ B, ¬(A ∨ B) |
Negation | ¬A, −A, ~A |
OR | A ∨ B, A + B, A ∥ B |
XNOR gate | A XNOR B |
Exclusive or | A ⊻ B, A ⊕ B |
For example, consider an argument with the two premises A → B and A and the conclusion B. In Gentzen's notation, this would be written like this:

\cfrac{A \to B \qquad A}{B}
The premises are shown above a line, called the inference line, separated by a comma, which indicates combination of premises. The conclusion is written below the inference line. The inference line represents syntactic consequence, sometimes called deductive consequence, which is also symbolized with ⊢. So the above can also be written in one line as A → B, A ⊢ B. (The turnstile, for syntactic consequence, is of lower precedence than the comma, which represents premise combination, which in turn is of lower precedence than the arrow, used for material implication; so no parentheses are needed to interpret this formula.)
Syntactic consequence is contrasted with semantic consequence, which is symbolized with ⊧. In this case, the conclusion follows syntactically because natural deduction is a syntactic proof system, which takes its inference rules as postulates.
Gentzen's style will be used in much of this article. Gentzen's discharging annotations, used to internalise hypothetical judgments, can be avoided by representing proofs as a tree of sequents Γ ⊢ A instead of a tree of judgments of the form "A (is true)".
+ Suppes–Lemmon style proof (first example)
! Assumption set ! Line ! Formula ! Justification
1 | 1 | p → q | A |
2 | 2 | ¬q | A |
3 | 3 | p | A |
1, 3 | 4 | q | 1, 3 →E |
1, 2 | 5 | ¬p | 2, 4 RAA |
Q.E.D.
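The same argument can be replayed in a proof assistant. The following is a minimal sketch in Lean 4 (the theorem name is illustrative, not from the literature): the extra assumption of p at line 3 becomes the bound variable hp, and the reductio step becomes the application of the denial to the derived q.

-- From p → q and ¬q, infer ¬p (the proof tabulated above).
theorem modus_tollens (p q : Prop) (hpq : p → q) (hnq : ¬q) : ¬p :=
  fun hp => hnq (hpq hp)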
Negation (¬) is defined as implication to falsity: ¬A := A → ⊥.
Older publications, and publications that do not focus on weaker logical systems such as minimal logic or intuitionistic logic, take negation as a primitive logical connective, meaning it is assumed as a basic operation and not defined in terms of other connectives. Some authors, such as Bostock, also take further connectives as primitive. This is required in paraconsistent logics, which do not treat ¬A and A → ⊥ as equivalent.
A different notational convention sees the language's syntax as a categorial grammar with the single category "formula", denoted by the symbol F. So any elements of the syntax are introduced by categorizations, for which the notation is A : F, meaning "A is an expression for an object in the category F." The sentence-letters, then, are introduced by categorizations such as p : F, q : F, r : F, and so on; the connectives, in turn, are defined by statements similar to the above, but using categorization notation, as seen below:
+ Connectives defined through a categorial grammar
Falsity | ⊥ : F |
Conjunction | (A ∧ B) : F, given A : F and B : F |
Disjunction | (A ∨ B) : F, given A : F and B : F |
Implication | (A → B) : F, given A : F and B : F |
Define negation as ¬A := A → ⊥.
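This categorization of formulas can also be written down directly as a small datatype. The following is an illustrative sketch in Lean 4 (the type and constructor names are hypothetical): each constructor plays the role of one categorization rule, and negation is a defined abbreviation rather than a primitive constructor.

-- The category F of formulas as an inductive type.
inductive Formula where
  | atom   : String → Formula             -- sentence letters p, q, r, ...
  | falsum : Formula                      -- ⊥
  | and    : Formula → Formula → Formula  -- A ∧ B
  | or     : Formula → Formula → Formula  -- A ∨ B
  | imp    : Formula → Formula → Formula  -- A → B

-- ¬A is defined as A → ⊥, not built in.
def Formula.neg (A : Formula) : Formula :=
  Formula.imp A Formula.falsum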
The following is a list of primitive inference rules for natural deduction in propositional logic:
+ Rules for propositional logic
! Introduction rules ! Elimination rules
\cfrac{A \qquad B}{A \land B}\ \land_I | \cfrac{A \land B}{A}\ \land_{E1} \qquad \cfrac{A \land B}{B}\ \land_{E2} |
\cfrac{A}{A \lor B}\ \lor_{I1} \qquad \cfrac{B}{A \lor B}\ \lor_{I2} | \cfrac{A \lor B \qquad \begin{array}{c}[A]^u \\ \vdots \\ C\end{array} \qquad \begin{array}{c}[B]^w \\ \vdots \\ C\end{array}}{C}\ \lor_{E^{u,w}} |
\cfrac{\begin{array}{c}[A]^u \\ \vdots \\ B\end{array}}{A \to B}\ \to_{I^u} | \cfrac{A \to B \qquad A}{B}\ \to_E |
(⊥ has no introduction rule) | \cfrac{\bot}{C}\ \bot_E |
Example 1. Goal: P → ((P → ⊥) → ⊥), that is, P → ¬¬P.
Proof:
\cfrac{\cfrac{\cfrac{[P]^v \qquad [P \to \bot]^u}{\bot}\ \to_E}{(P \to \bot) \to \bot}\ \to_{I^u}}{P \to ((P \to \bot) \to \bot)}\ \to_{I^v}
Example 2: Proof, within minimal logic, of A → (B → (A ∧ B)):
\cfrac{\cfrac{\cfrac{[A]^u \quad [B]^w}{A \land B}\ \land_I}{B \to \left( A \land B \right)}\ \to_{I^w}}{A \to \left( B \to \left( A \land B \right) \right)}\ \to_{I^u}
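Both derivations can be replayed as proof terms in a proof assistant. The following Lean 4 snippet is an illustrative sketch (the theorem names are hypothetical): each fun corresponds to a use of →I discharging a hypothesis, the application hnp hp corresponds to →E, and the anonymous constructor ⟨ha, hb⟩ corresponds to ∧I. Lean writes ⊥ as False, so ¬P unfolds to P → False.

-- Example 1: P → ¬¬P.
theorem example1 (P : Prop) : P → ((P → False) → False) :=
  fun hp hnp => hnp hp

-- Example 2: A → (B → (A ∧ B)).
theorem example2 (A B : Prop) : A → (B → (A ∧ B)) :=
  fun ha hb => ⟨ha, hb⟩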
Later logicians and educators such as Patrick Suppes and John Lemmon adapted Fitch's system. Although they introduced notational changes, such as tracking the assumptions on which each line depends with sets of line numbers rather than with indentation, the underlying structure of Fitch-style natural deduction remained intact. These variations are often referred to as the Suppes–Lemmon format, though they are fundamentally based on Fitch's original notation.
Suppes introduced natural deduction using Gentzen-style rules.
Lemmon formalized further derived rules. He also treated negation as implication to falsity, ¬A := A → ⊥. This is not stated as a formal definition in Beginning Logic, but it is implicitly assumed throughout the system.
The table below, based on Lemmon (1978) and Allen & Hand (2022), lists the rules; Lemmon's derived rules can be obtained from the Gentzen-style primitive rules.
There are nine primitive rules of proof: the rule of assumption, plus four pairs of introduction and elimination rules for the binary connectives, and the rules of double negation and reductio ad absurdum, of which only one is needed. Disjunctive syllogism can be used as an easier alternative to the proper ∨-elimination, and MTT is a commonly given rule, although it is not primitive.
+ List of Inference Rules
! Rule ! Alternative names ! Annotation ! Assumption set ! Statement
Rule of Assumptions | Assumption | A | The current line number. | At any stage of the argument, introduce a proposition as an assumption of the argument. |
Conjunction introduction | Ampersand introduction, conjunction (CONJ) | m, n &I | The union of the assumption sets at lines m and n. | From φ and ψ at lines m and n, infer φ ∧ ψ. |
Conjunction elimination | Simplification (S), ampersand elimination | m &E | The same as at line m. | From φ ∧ ψ at line m, infer φ and ψ. |
Disjunction introduction | Addition (ADD) | m ∨I | The same as at line m. | From φ at line m, infer φ ∨ ψ, whatever ψ may be. |
Disjunction elimination | Wedge elimination, dilemma (DL) | j, k, l, m, n ∨E | The union of the assumption sets at lines j, l and n, excluding k and m (the discharged assumptions). | From φ ∨ ψ at line j, an assumption of φ at line k, a derivation of χ from φ at line l, an assumption of ψ at line m, and a derivation of χ from ψ at line n, infer χ. |
Arrow introduction | Conditional proof (CP), conditional introduction | n, →I (m) | Everything in the assumption set at line n, excepting m, the line where the antecedent was assumed. | From ψ at line n, following from the assumption of φ at line m, infer φ → ψ. |
Arrow elimination | Modus ponendo ponens (MPP), modus ponens (MP), conditional elimination | m, n →E | The union of the assumption sets at lines m and n. | From φ → ψ at line m, and φ at line n, infer ψ. |
Double negation | Double negation elimination | m DN | The same as at line m. | From ¬¬φ at line m, infer φ. |
Reductio ad absurdum | Indirect proof (IP), negation introduction (¬I), negation elimination (¬E) | m, n RAA (k) | The union of the assumption sets at lines m and n, excluding k (the denied assumption). | From a sentence φ at line m and its denial ¬φ at line n, infer the denial of any assumption appearing in the proof (at line k). |
A rule is locally sound if a derivation in which an introduction of a connective is immediately followed by its elimination can be simplified to one without this detour, and locally complete if every derivation of a proposition can be expanded to one in which its principal connective is explicitly introduced. These notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry–Howard isomorphism. By local completeness, we see that every derivation can be converted to an equivalent derivation where the principal connective is introduced. In fact, if the entire derivation obeys this ordering of eliminations followed by introductions, then it is said to be normal. In a normal derivation all eliminations happen above introductions. In most logics, every derivation has an equivalent normal derivation, called a normal form. The existence of normal forms is generally hard to prove using natural deduction alone, though such accounts do exist in the literature, most notably by Dag Prawitz in 1961. It is much easier to show this indirectly by means of a cut-free sequent calculus presentation.
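Under the Curry–Howard reading, these local reductions and expansions can be observed directly on proof terms. The following Lean 4 snippet is an illustrative sketch of the two principles for conjunction: the β case shows that eliminating a conjunction that was just introduced reduces to the original component, and the η case re-derives a conjunction by eliminating and then re-introducing it.

-- β-reduction: fst ⟨a, b⟩ reduces to a, so the equation holds by rfl.
example (A B : Prop) (a : A) (b : B) : (And.intro a b).left = a := rfl

-- η-expansion: any proof h of A ∧ B can be expanded into ∧I applied to its projections.
example (A B : Prop) (h : A ∧ B) : A ∧ B := And.intro h.left h.right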
First and higher-order extensions
To extend natural deduction to first-order logic, we consider a countable set V of variables and a countable set F of function symbols, and build up terms with the formation rules:

\frac{v \in V}{v : \mathcal{T}}\ \text{var}_F \qquad \frac{f \in F \qquad t_1 : \mathcal{T} \quad \cdots \quad t_n : \mathcal{T}}{f(t_1, \ldots, t_n) : \mathcal{T}}\ \text{fn}_F

For propositions, we consider a third countable set P of predicates, and define atomic predicates over terms with the following formation rule:

\frac{\phi \in P \qquad t_1 : \mathcal{T} \quad \cdots \quad t_n : \mathcal{T}}{\phi(t_1, \ldots, t_n) : \mathcal{F}}\ \text{pred}_F
The first two rules of formation provide a definition of a term that is effectively the same as that defined in term algebra and model theory, although the focus of those fields of study is quite different from natural deduction. The third rule of formation effectively defines an atomic formula, as in first-order logic, and again in model theory. To these are added a pair of formation rules defining the notation for quantified propositions, one for universal (∀) and one for existential (∃) quantification:

\cfrac{\begin{array}{c}\cfrac{}{x : \mathcal{T}}\text{ u} \\ \vdots \\ A : \mathcal{F}\end{array}}{\forall x.A : \mathcal{F}}\ \forall_{F^u} \qquad \cfrac{\begin{array}{c}\cfrac{}{x : \mathcal{T}}\text{ u} \\ \vdots \\ A : \mathcal{F}\end{array}}{\exists x.A : \mathcal{F}}\ \exists_{F^u}
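The extended syntax can again be written down as datatypes. The following Lean 4 snippet is an illustrative sketch (type and constructor names are hypothetical) mirroring the formation rules above: terms are variables or function symbols applied to argument lists, and formulas include atomic predicates and the two quantifiers.

-- Terms: variables v ∈ V and applications f(t₁, …, tₙ).
inductive Term where
  | var : String → Term
  | app : String → List Term → Term

-- First-order formulas: atomic predicates φ(t₁, …, tₙ), implication, and quantifiers.
inductive FOForm where
  | pred : String → List Term → FOForm
  | imp  : FOForm → FOForm → FOForm
  | all  : String → FOForm → FOForm   -- ∀ x. A
  | ex   : String → FOForm → FOForm   -- ∃ x. A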
The universal quantifier has the introduction and elimination rules:
\cfrac{\begin{array}{c}\cfrac{}{a : \mathcal{T}}\text{ u} \\ \vdots \\ [a/x]A\end{array}}{\forall x.A}\ \forall_{I^{u,a}} \qquad \cfrac{\forall x.A \qquad t : \mathcal{T}}{[t/x]A}\ \forall_E

The existential quantifier has the introduction and elimination rules:

\cfrac{[t/x]A \qquad t : \mathcal{T}}{\exists x.A}\ \exists_I \qquad \cfrac{\exists x.A \qquad \begin{array}{c}\cfrac{}{a : \mathcal{T}}\text{ u} \quad \cfrac{}{[a/x]A}\text{ v} \\ \vdots \\ C\end{array}}{C}\ \exists_{E^{a,u,v}}

In these rules, the notation [t/x]A stands for the substitution of t for every (visible) instance of x in A, avoiding capture. As before, the superscripts on the name stand for the components that are discharged: the term a cannot occur in the conclusion of ∀I (such terms are known as eigenvariables or parameters), and the hypotheses named u and v in ∃E are localised to the second premise in a hypothetical derivation. Although the propositional logic of earlier sections was decidable, adding the quantifiers makes the logic undecidable.

So far, the quantified extensions are first-order: they distinguish propositions from the kinds of objects quantified over. Higher-order logic takes a different approach and has only a single sort of propositions. The quantifiers have as the domain of quantification the very same sort of propositions, as reflected in the formation rules:
\cfrac{\begin{matrix}\cfrac{}{p : \mathcal{F}}\text{ u} \\ \vdots \\ A : \mathcal{F}\end{matrix}}{\forall p.A : \mathcal{F}}\ \forall_{F^u} \qquad \cfrac{\begin{matrix}\cfrac{}{p : \mathcal{F}}\text{ u} \\ \vdots \\ A : \mathcal{F}\end{matrix}}{\exists p.A : \mathcal{F}}\ \exists_{F^u}

A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article. It is possible to be in-between first-order and higher-order logics. For example, second-order logic has two kinds of propositions, one kind quantifying over terms, and the second kind quantifying over propositions of the first kind.
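For a concrete feel of the quantifier rules, the following Lean 4 examples are an illustrative sketch: application plays the role of ∀E, the anonymous constructor supplies a witness for ∃I, Exists.elim keeps the hypotheses of ∃E local to the second premise, and the final line uses fun both as ∀I and to quantify over propositions themselves, in the higher-order style.

-- ∀E: instantiate a universal statement at a particular term t.
example (T : Type) (A : T → Prop) (h : ∀ x, A x) (t : T) : A t := h t

-- ∃I: provide the witness t together with a proof of A t.
example (T : Type) (A : T → Prop) (t : T) (h : A t) : ∃ x, A x := ⟨t, h⟩

-- ∃E: use an existential by reasoning under a locally named witness.
example (T : Type) (A : T → Prop) (C : Prop)
    (h : ∃ x, A x) (k : ∀ x, A x → C) : C :=
  Exists.elim h k

-- Higher-order quantification: the domain of quantification is Prop itself.
example : ∀ p : Prop, p → p := fun p hp => hp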
Proofs and type theory
A hypothetical derivation with hypotheses u1 : J1, u2 : J2, ..., un : Jn and conclusion J can be localised as a single judgment:

──── u1   ──── u2   ...   ──── un
 J1        J2              Jn
              ⋮
              J

⇒   u1:J1, u2:J2, ..., un:Jn ⊢ J
Proofs can also be represented explicitly as terms; each rule for forming a proof term (left) corresponds to an annotated inference rule (right):

u ∈ V
─────────── proof-F
u proof

⇒

───────────── hyp
u:A ⊢ u : A

π1 proof    π2 proof
──────────────────── pair-F
(π1, π2) proof

⇒

Γ ⊢ π1 : A    Γ ⊢ π2 : B
───────────────────────── ∧I
Γ ⊢ (π1, π2) : A ∧ B

π proof
─────────── fst-F
fst π proof

⇒

Γ ⊢ π : A ∧ B
───────────── ∧E1
Γ ⊢ fst π : A

π proof
─────────── snd-F
snd π proof

⇒

Γ ⊢ π : A ∧ B
───────────── ∧E2
Γ ⊢ snd π : B

π proof
──────────── λ-F
λu. π proof

⇒

Γ, u:A ⊢ π : B
───────────────── ⊃I
Γ ⊢ λu. π : A ⊃ B

π1 proof    π2 proof
──────────────────── app-F
π1 π2 proof

⇒

Γ ⊢ π1 : A ⊃ B    Γ ⊢ π2 : A
───────────────────────────── ⊃E
Γ ⊢ π1 π2 : B
So far the judgment "Γ ⊢ π : A" has had a purely logical interpretation. In type theory, the logical view is exchanged for a more computational view of objects. Propositions in the logical interpretation are now viewed as types, and proofs as programs in the lambda calculus. Thus the interpretation of "π : A" is "the program π has type A". The logical connectives are also given a different reading: conjunction is viewed as the product type (×), implication as the function type (→), and so on. The differences are only cosmetic, however. Type theory has a natural deduction presentation in terms of formation, introduction and elimination rules; in fact, the reader can easily reconstruct what is known as simple type theory from the previous sections.
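The following Lean 4 fragment is an illustrative sketch of this propositions-as-types reading (the definition names are hypothetical): the same pairing-and-projection term appears once as a program on data and once as a proof of a proposition.

-- A program on pairs: swap the components of a product.
def swapData (α β : Type) : α × β → β × α :=
  fun p => (p.2, p.1)

-- The same term read as a proof: commutativity of conjunction.
theorem swapProof (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.2, h.1⟩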
The difference between logic and type theory is primarily a shift of focus from the types (propositions) to the programs (proofs). Type theory is chiefly interested in the convertibility or reducibility of programs. For every type, there are canonical programs of that type which are irreducible; these are known as canonical forms or values. If every program can be reduced to a canonical form, then the type theory is said to be normalising (or weakly normalising). If the canonical form is unique, then the theory is said to be strongly normalising. Normalisability is a rare feature of most non-trivial type theories, which is a big departure from the logical world. (Recall that almost every logical derivation has an equivalent normal derivation.) To sketch the reason: in type theories that admit recursive definitions, it is possible to write programs that never reduce to a value; such looping programs can generally be given any type. In particular, the looping program has type ⊥, although there is no logical proof of "⊥". For this reason, the propositions as types; proofs as programs paradigm only works in one direction, if at all: interpreting a type theory as a logic generally gives an inconsistent logic.
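By way of contrast, the following Lean 4 sketch shows how one proof assistant in this tradition walls off such looping programs (the details are specific to Lean and given only as an illustration): a definition that never reduces to a value must be marked partial, is only allowed at types already known to be inhabited, and is opaque to the logic, so it cannot be used to derive a proof of False.

-- A looping program: accepted only because it is marked `partial` and
-- Nat is known to be inhabited; it never reduces to a canonical value.
partial def loop : Nat := loop

-- The analogous proposition-level definition, e.g. a recursive "proof" of False,
-- is rejected by the termination checker, which keeps the logic consistent.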
Dependent type theory refines the picture with the dependent function (Π) and dependent pair (Σ) types, in which the type B may mention the value x of type A. Their formation, introduction and elimination rules are:

Γ ⊢ A type    Γ, x:A ⊢ B type
───────────────────────────── Π-F
Γ ⊢ Πx:A. B type

Γ ⊢ A type    Γ, x:A ⊢ B type
───────────────────────────── Σ-F
Γ ⊢ Σx:A. B type

Γ, x:A ⊢ π : B
──────────────────── ΠI
Γ ⊢ λx. π : Πx:A. B

Γ ⊢ π1 : Πx:A. B    Γ ⊢ π2 : A
─────────────────────────────── ΠE
Γ ⊢ π1 π2 : [π2/x] B

Γ ⊢ π1 : A    Γ ⊢ π2 : [π1/x] B
─────────────────────────────── ΣI
Γ ⊢ (π1, π2) : Σx:A. B

Γ ⊢ π : Σx:A. B
──────────────── ΣE1
Γ ⊢ fst π : A

Γ ⊢ π : Σx:A. B
──────────────────────── ΣE2
Γ ⊢ snd π : [fst π/x] B
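These rules have direct counterparts in any dependently typed language. The following Lean 4 snippet is an illustrative sketch (the definition names are hypothetical): B is a family of types indexed by A, the dependent function application returns a result whose type mentions the argument, and the projections of a dependent pair realise ΣE1 and ΣE2.

-- ΠE: applying a dependent function; the result type B a mentions the argument a.
def applyDep (A : Type) (B : A → Type) (f : (x : A) → B x) (a : A) : B a :=
  f a

-- ΣI: a dependent pair whose second component's type depends on the first.
def mkDepPair (A : Type) (B : A → Type) (a : A) (b : B a) : Σ x : A, B x :=
  ⟨a, b⟩

-- ΣE1 and ΣE2: the projections fst and snd; note the substituted type of the second.
def firstOf (A : Type) (B : A → Type) (p : Σ x : A, B x) : A := p.1
def secondOf (A : Type) (B : A → Type) (p : Σ x : A, B x) : B p.1 := p.2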
Since dependent type theories allow types to depend on programs, a natural question to ask is whether it is possible for programs to depend on types, or any other combination. There are many kinds of answers to such questions. A popular approach in type theory is to allow programs to be quantified over types, also known as parametric polymorphism; of this there are two main kinds: if types and programs are kept separate, then one obtains a somewhat more well-behaved system called predicative polymorphism; if the distinction between program and type is blurred, one obtains the type-theoretic analogue of higher-order logic, also known as impredicative polymorphism. Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.
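A minimal sketch of the polymorphic style in Lean 4 (the name polyId is hypothetical): the program abstracts over the type itself and is then instantiated at different types, in the manner of System F. Lean keeps this quantification over its Type universes predicative, one of the design choices alluded to in the preceding paragraph.

-- A program quantified over types: the polymorphic identity function.
def polyId : (α : Type) → α → α :=
  fun α a => a

#check polyId Nat 3        -- instantiated at Nat
#check polyId String "x"   -- instantiated at String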
The intersection of logic and type theory is a vast and active research area. New logics are usually formalised in a general type theoretic setting, known as a logical framework. Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power. These logical frameworks are themselves always specified as natural deduction systems, which is a testament to the versatility of the natural deduction approach.
The law of the excluded middle, A ∨ ¬A, is not obviously either an introduction or an elimination; indeed, it involves two distinct connectives. Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of David Hilbert and Arend Heyting:
────────── XM1
 A ∨ ¬A

 ¬¬A
────── XM2
  A

──────── u
  ¬A
   ⋮
   p
────────── XM3^{u,p}
   A
(XM3 is merely XM2 expressed in terms of ⊥E.) This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.
A comparatively more satisfactory treatment of classical natural deduction in terms of introduction and elimination rules alone was first proposed by Michel Parigot in 1992 in the form of a classical lambda calculus called λμ. The key insight of his approach was to replace a truth-centric judgment A with a more classical notion, reminiscent of the sequent calculus: in localised form, instead of Γ ⊢ A, he used Γ ⊢ Δ, with Δ a collection of propositions similar to Γ. Γ was treated as a conjunction, and Δ as a disjunction. This structure is essentially lifted directly from classical sequent calculus, but the innovation in λμ was to give a computational meaning to classical natural deduction proofs in terms of a callcc or a throw/catch mechanism seen in LISP and its descendants. (See also: first class control.)
Another important extension was for modal logic and other logics that need more than just the basic judgment of truth. These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Dag Prawitz in 1965, and have since accumulated a large body of related work. To give a simple example, the modal logic S4 requires one new judgment, "A valid", that is categorical with respect to truth: A is valid if its truth does not depend on any hypotheses, that is, if ⋅ ⊢ A.
This categorical judgment is internalised as a unary connective ◻A (read "necessarily A") with the following introduction and elimination rules:
A valid
──────── ◻I
 ◻ A

 ◻ A
────── ◻E
  A
Note that the premise " A valid" has no defining rules; instead, the categorical definition of validity is used in its place. This mode becomes clearer in the localised form when the hypotheses are explicit. We write "Ω;Γ ⊢ A" where Γ contains the true hypotheses as before, and Ω contains valid hypotheses. On the right there is just a single judgment " A"; validity is not needed here since "Ω ⊢ A valid" is by definition the same as "Ω;⋅ ⊢ A". The introduction and elimination forms are then:
Ω;⋅ ⊢ π : A
───────────────────── ◻I
Ω;⋅ ⊢ box π : ◻ A

Ω;Γ ⊢ π : ◻ A
────────────────────── ◻E
Ω;Γ ⊢ unbox π : A
The modal hypotheses have their own version of the hypothesis rule and substitution theorem.
─────────────────────────────── valid-hyp
Ω, u: (A valid) ; Γ ⊢ u : A
This framework of separating judgments into distinct collections of hypotheses, also known as multi-zoned or polyadic contexts, is very powerful and extensible; it has been applied to many different modal logics, and also to linear logic and other substructural logics, to give a few examples. However, relatively few systems of modal logic can be formalised directly in natural deduction. To give proof-theoretic characterisations of these systems, extensions such as labelling or systems of deep inference are needed.
The addition of labels to formulae permits much finer control of the conditions under which rules apply, allowing more flexible proof techniques to be applied, as has been done in the case of labelled deduction. Labels also allow the naming of worlds in Kripke semantics; one influential technique converts the frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic. Other work surveys the application of proof theories such as Avron and Pottinger's hypersequents and Belnap's display logic to modal logics such as S5 and B.
In the sequent calculus all inference rules have a purely bottom-up reading. Inference rules can apply to elements on both sides of the turnstile. (To differentiate from natural deduction, this article uses a double arrow ⇒ instead of the right tack ⊢ for sequents.) The introduction rules of natural deduction are viewed as right rules in the sequent calculus, and are structurally very similar. The elimination rules on the other hand turn into left rules in the sequent calculus. To give an example, consider disjunction; the right rules are familiar:
Γ ⇒ A
───────── ∨R1
Γ ⇒ A ∨ B

Γ ⇒ B
───────── ∨R2
Γ ⇒ A ∨ B

The corresponding left rule acts on an assumption of the disjunction:

Γ, u:A ⇒ C    Γ, v:B ⇒ C
─────────────────────────── ∨L
Γ, w: (A ∨ B) ⇒ C

Compare this with the elimination rule ∨E of natural deduction:

Γ ⊢ A ∨ B    Γ, u:A ⊢ C    Γ, v:B ⊢ C
─────────────────────────────────────── ∨E
Γ ⊢ C
────── hyp
  |
  | elim. rules
  ↓
────────────────────── ↑↓ meet
  ↑
  | intro. rules
  |
conclusion

⇒

─────────────────────── init
  ↑
  | left/right rules
  |
conclusion
──────────── init
Γ, u:A ⇒ A
It is clear by these theorems that the sequent calculus does not change the notion of truth, because the same collection of propositions remains true. Thus, one can use the same proof objects as before in sequent calculus derivations. As an example, consider conjunction. The right rule is virtually identical to the introduction rule:
Γ ⇒ π1 : A    Γ ⇒ π2 : B
─────────────────────────── ∧R
Γ ⇒ (π1, π2) : A ∧ B

Γ ⊢ π1 : A    Γ ⊢ π2 : B
───────────────────────── ∧I
Γ ⊢ (π1, π2) : A ∧ B

The left rules, however, perform substitutions that the corresponding elimination rules do not:

Γ, u:A ⇒ π : C
─────────────────────────────── ∧L1
Γ, v: (A ∧ B) ⇒ [fst v/u] π : C

Γ ⊢ π : A ∧ B
───────────── ∧E1
Γ ⊢ fst π : A

Γ, u:B ⇒ π : C
─────────────────────────────── ∧L2
Γ, v: (A ∧ B) ⇒ [snd v/u] π : C

Γ ⊢ π : A ∧ B
───────────── ∧E2
Γ ⊢ snd π : B
The kinds of proofs generated in the sequent calculus are therefore rather different from those of natural deduction. The sequent calculus produces proofs in what is known as the β-normal η-long form, which corresponds to a canonical representation of the normal form of the natural deduction proof. If one attempts to describe these proofs using natural deduction itself, one obtains what is called the intercalation calculus (first described by John Byrnes), which can be used to formally define the notion of a normal form for natural deduction.
The substitution theorem of natural deduction takes the form of a structural rule or structural theorem known as cut in the sequent calculus.
In most well-behaved logics, cut is unnecessary as an inference rule, though it remains provable as a meta-theorem; the superfluousness of the cut rule is usually presented as a computational process, known as cut elimination. This has an interesting application for natural deduction; usually it is extremely tedious to prove certain properties directly in natural deduction because of an unbounded number of cases. For example, consider showing that a given proposition is not provable in natural deduction. A simple inductive argument fails because of rules like ∨E or ⊥E, which can introduce arbitrary propositions. However, we know that the sequent calculus is complete with respect to natural deduction, so it is enough to show this unprovability in the sequent calculus. Now, if cut is not available as an inference rule, then all sequent rules either introduce a connective on the right or the left, so the depth of a sequent derivation is fully bounded by the connectives in the final conclusion. Thus, showing unprovability is much easier, because there are only a finite number of cases to consider, and each case is composed entirely of sub-propositions of the conclusion. A simple instance of this is the global consistency theorem: "⋅ ⊢ ⊥" is not provable. In the sequent calculus version, this is manifestly true because there is no rule that can have "⋅ ⇒ ⊥" as a conclusion. Proof theorists often prefer to work with cut-free sequent calculus formulations because of such properties.